The Behavior of Large Language Models When Prompted to Generate Code Explanations

Oli, Priti, Banjade, Rabin, Chapagain, Jeevan, Rus, Vasile

arXiv.org Artificial Intelligence

This paper systematically investigates the generation of code explanations by Large Language Models (LLMs) for code examples commonly encountered in introductory programming courses. Our findings reveal significant variations in the nature of code explanations produced by LLMs, influenced by factors such as the wording of the prompt, the specific code examples under consideration, the programming language involved, the temperature parameter, and the version of the LLM. However, a consistent pattern emerges for Java and Python, where explanations exhibit a Flesch-Kincaid readability level of approximately grade 7-8 and a consistent lexical density, i.e., the proportion of meaningful words relative to the total explanation size. Additionally, the generated explanations consistently achieve high scores for correctness, but lower scores on three other metrics: completeness, conciseness, and specificity.
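The two text metrics named in the abstract can be approximated with simple heuristics. The sketch below is illustrative only, not the paper's tooling: the vowel-group syllable counter and the small stopword list (standing in for a full function-word lexicon) are assumptions, while the Flesch-Kincaid grade formula itself is the standard one.

```python
import re

# Minimal stand-in for a function-word lexicon (assumption, not exhaustive).
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and",
             "or", "that", "this", "it", "for", "on", "with", "as", "by", "be"}

def count_syllables(word: str) -> int:
    """Rough syllable count: contiguous vowel groups, minimum 1."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a typical silent final 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard FK grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def lexical_density(text: str) -> float:
    """Fraction of words that are content words (non-stopwords)."""
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    content = [w for w in words if w not in STOPWORDS]
    return len(content) / len(words)
```

Under these measures, a longer average sentence or more multisyllabic vocabulary pushes the grade level up, while a higher share of content words pushes lexical density toward 1.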


Integrating machine learning concepts into undergraduate classes

Sahu, Chinmay, Ayotte, Blaine, Banavar, Mahesh K.

arXiv.org Artificial Intelligence

In this innovative practice work-in-progress paper, we compare two different methods to teach machine learning concepts to undergraduate students in Electrical Engineering. While machine learning is now being offered as a senior-level elective in several curricula, this does not mean all students are exposed to it. Exposure to the concepts and practical applications of machine learning will assist in the creation of a workforce ready to tackle problems related to machine learning, currently a hot topic in industry. Preliminary assessments indicate that this approach promotes student learning. While students prefer the proposed side-by-side teaching approach, numerical comparisons show that the workshop approach may be more effective for student learning, indicating that further work in this area is required.


Artificial Intelligence Startup Wyzerr Launches Platform to Create Surveys That Feel Like Games

#artificialintelligence

Artificial intelligence start-up Wyzerr announced today the launch of an online platform to create feedback surveys that feel like games. The surveys, which the company calls "Smart Forms," can capture up to 25 questions in under 60 seconds. Wyzerr's Smart Forms were developed based on playful design principles and rules of engagement. Wyzerr spent the past two years piloting their gamified surveys with large enterprises like Walmart, Unilever, and Volkswagen. The key learnings from those pilots served as the inspiration for the company's new online Smart Form builder, which guides customers in creating effective feedback campaigns.